Netanyahu warns of potential 'eruption of AI-driven wars' that could lead to 'unimaginable' consequences
Netanyahu addressed the United Nations General Assembly in New York last week. Israeli Prime Minister Benjamin Netanyahu warned that the world is on the cusp of an artificial intelligence revolution that could launch nations into prosperous times or lead to all-out destruction fueled by devastating high-tech wars. "The AI revolution is progressing at lightning speed," Netanyahu said during his U.N. General Assembly speech last week. "It took centuries for humanity to adapt to the agricultural revolution. It took decades to adapt to the industrial revolution. We may have but a few years to adapt to the AI revolution."
- North America > United States > New York (0.25)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.16)
- Asia > Middle East > Saudi Arabia (0.05)
- (2 more...)
- Government > Military (0.50)
- Government > Regional Government (0.38)
AI could grow so powerful it replaces experienced professionals within 10 years, Sam Altman warns
OpenAI CEO Sam Altman took questions from reporters after his congressional hearing, including defining "scary AI." Artificial intelligence could become so powerful that it replaces professional experts "in most domains" within the next decade, OpenAI CEO Sam Altman warned. Altman, the chief of the AI lab behind popular platforms such as ChatGPT, published a blog post this week with two other OpenAI leaders, Greg Brockman and Ilya Sutskever, warning that "we must mitigate the risks of today's AI technology." "It's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," reads the post, which was published on OpenAI's website. "In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the post continued. Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., on May 16, 2023. Altman and his fellow OpenAI executives compared artificial intelligence to nuclear energy and synthetic biology, arguing that regulations must be handled with "special treatment and coordination" to be effective. They suggested that a version of the International Atomic Energy Agency will be needed to regulate the "superintelligence" technology. "Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," they wrote.
Altman appeared before Congress this month to discuss how to regulate artificial intelligence, saying he welcomes U.S. leaders to craft such rules. Following the hearing, Altman provided examples of "scary AI" to Fox News Digital, which included systems that could design "novel biological pathogens." "An AI that could hack into computer systems," he said. "I think these are all scary."
- North America > United States > District of Columbia > Washington (0.25)
- North America > Canada > Ontario > Toronto (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI chief Altman described what 'scary' AI means to him, but ChatGPT has its own examples
Sam Altman, CEO of OpenAI, the artificial intelligence lab behind ChatGPT, took questions from reporters after his congressional hearing, including his definition of "scary AI." Altman testified before Congress in Washington, D.C., this week about regulating artificial intelligence as well as his personal fears over the tech and what "scary" AI systems mean to him. Fox News Digital asked OpenAI's wildly popular chatbot, ChatGPT, to also weigh in on examples of "scary" artificial intelligence systems, and it reported six hypothetical instances of how AI could become weaponized or have potentially harmful impacts on society. When asked by Fox News Digital on Tuesday after his testimony before a Senate Judiciary subcommittee, Altman gave examples of "scary AI" that included systems that could design "novel biological pathogens." "An AI that could hack into computer systems," he continued. "I think these are all scary. These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important."
- North America > United States > District of Columbia > Washington (0.25)
- North America > United States > New York (0.05)
- North America > United States > California (0.05)
- Media > News (1.00)
- Information Technology > Security & Privacy (0.99)
- Law > Statutes (0.71)
- Government > Regional Government > North America Government > United States Government (0.49)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Anti-'Terminator': AI not a 'creature' working toward self-awareness, OpenAI CEO Altman says
OpenAI CEO Sam Altman took questions from reporters following his congressional hearing and defined "scary AI." Altman said people should not try to "anthropomorphize" artificial intelligence and should discuss the powerful technology as a "tool," not a "creature." "I think there's a huge amount of speculation on that question," Altman told reporters Tuesday on Capitol Hill when asked how quickly AI could become "self-aware" if Congress does not regulate the technology. The line of questioning had echoes of the "Terminator" film series, in which AI brings about the apocalypse on the day it becomes "self-aware." "I think it's very important that we keep talking about this as a tool, not a creature, because it's so tempting to anthropomorphize it," he added. "I totally understand where the anxiety comes from. I think it's the wrong frame … the wrong way to think about it."
- North America > United States > California (0.06)
- Europe > Spain > Galicia > Madrid (0.06)
- Government (1.00)
- Law > Statutes (0.52)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.92)
OpenAI CEO Sam Altman reveals what he thinks is 'scary' about AI
Sam Altman, CEO of OpenAI, the artificial intelligence lab behind ChatGPT, took questions from reporters following his congressional hearing, including defining "scary AI." Altman outlined examples of "scary AI" to Fox News Digital after he served as a witness at a Senate subcommittee hearing on potential regulations on artificial intelligence. "Sure," Altman said when asked by Fox News Digital to provide an example of "scary AI." "An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary." "These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important." Altman appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on Tuesday morning to speak with lawmakers about how best to regulate the technology.
- Media > News (0.80)
- Law > Statutes (0.78)
- Government > Regional Government > North America Government > United States Government (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.94)
Scary AI Is More "Fantasia" Than "Terminator" - Issue 58: Self
When Nate Soares psychoanalyzes himself, he sounds less Freudian than Spockian. As a boy, he'd see people acting in ways he never would "unless I was acting maliciously," the former Google software engineer, who now heads the non-profit Machine Intelligence Research Institute, reflected in a blog post last year. "I would automatically, on a gut level, assume that the other person must be malicious." It's a habit anyone who's read or heard David Foster Wallace's "This is Water" speech will recognize. Soares later realized this folly when his "models of other people" became "sufficiently diverse," which isn't to say they're foolproof, he wrote in the same post.